Search Results: "dod"

23 March 2014

Dominique Dumont: Easier Lcdproc package upgrade with automatic configuration merge

Hello. This blog explains how the next lcdproc package provides easier upgrades with automatic configuration merge. Here's the current situation: lcdproc is shipped with several configuration files, including /etc/LCDd.conf. This file is modified upstream at every lcdproc release to bring configuration for new lcdproc drivers. On the other hand, this file is always customized to suit the specific hardware of the user's system, so upgrading the package will always lead to a conflict during upgrade: the user will always be required to choose whether to keep the current version or install the upstream version. The next version of libconfig-model-lcdproc-perl will ask the user whether to perform an automatic merge of the configuration: upstream changes are taken into account while the user's changes are preserved. The configuration upgrade shown here is based on Config::Model and can be applied to other packages.

Current lcdproc situation

To function properly, lcdproc's configuration must always be adapted to suit the user's hardware. On each upgrade the upstream configuration has often changed, so the user will often be shown this question:
Configuration file '/etc/LCDd.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** LCDd.conf (Y/I/N/O/D/Z) [default=N] ?
This question is asked in the middle of an upgrade and can be puzzling for an average user.

Next package with automatic merge

Starting from lcdproc 0.5.6, the configuration merge is handled automatically by the packaging script with the help of Config::Model::Lcdproc. When lcdproc is upgraded to 0.5.6, the following changes are visible:
* lcdproc depends on libconfig-model-lcdproc-perl
* the user is asked once by debconf whether to use automatic configuration upgrades or not.
* no further questions are asked (no ucf-style questions). For instance, here's an upgrade from lcdproc_0.5.5 to lcdproc_0.5.6:
$ sudo dpkg -i lcdproc_0.5.6-1_amd64.deb 
(Reading database ... 322757 files and directories currently installed.)
Preparing to unpack lcdproc_0.5.6-1_amd64.deb ...
Stopping LCDd: LCDd.
Unpacking lcdproc (0.5.6-1) over (0.5.5-3) ...
Setting up lcdproc (0.5.6-1) ...
Changes applied to lcdproc configuration:
- server ReportToSyslog: '' -> '1' # use standard value
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Starting LCDd: LCDd.
Processing triggers for man-db (2.6.6-1) ...
Note: the automatic upgrade currently applies only to LCDd.conf. The other configuration files of lcdproc are handled the usual way.

Other benefits

Users will also be able to:
* check the lcdproc configuration with sudo cme check lcdproc
* edit the configuration with a GUI (see Managing Lcdproc configuration with cme for more details); a sample session is sketched below
Here's a screenshot of the GUI: GUI to edit lcdproc configuration
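In shell terms, that boils down to something like this (a minimal sketch; cme comes with libconfig-model-lcdproc-perl, and the graphical editor assumes the Tk UI parts of Config::Model are installed):
$ sudo cme check lcdproc   # validate /etc/LCDd.conf against the lcdproc model
$ sudo cme edit lcdproc    # open the configuration in the graphical editor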
More information

* The libconfig-model-lcdproc-perl package page. This package provides a configuration model for lcdproc.
* This blog explains how this model is generated from upstream LCDd.conf.
* How to adapt a package to perform configuration upgrades with Config::Model

Next steps

Automatic configuration merge can be applied to other packages, but my free time is already taken up by the maintenance of Config::Model and the existing models, so there's no room for me to take over another package. On the other hand, I will definitely help people who want to provide automatic configuration merge in their packages. Feel free to contact me on:
* config-model-user mailing list
* debian-perl mailing list (where Config::Model is often used to maintain debian package file with cme)
* #debian-perl IRC channel

All the best
Tagged: config-model, configuration, debian, Perl, upgrade

11 March 2014

Steve Langasek: My CuBox-i has arrived

A couple of weeks ago, Gunnar Wolf mentioned on IRC that his CuBox-i4 had arrived. This resulted in various jealous noises from me; having heard about this device making the rounds at the Kernel Summit, I ordered one for myself back in December, as part of the long-delayed HDification of our home entertainment system and coinciding with the purchase of a new Samsung SmartTV. We've been running an Intel Coppermine Celeron for a decade as a MythTV frontend and encoder (hardware-assisted with a PVR-250), which is fine for SD video, but really doesn't cut it for anything HD. So after finally getting a TV that would showcase HD in all its glory, I figured it was time to upgrade from an S-Video-out, barely-limping-along tower machine to something more modern with HDMI out, eSATA, hardware video decoding, and whose biggest problem is that it's so small it threatens to get lost in the wiring!

Since placing the order, I've been bemused to find that the SmartTV is so smart that it has had a dramatic impact on how we consume media; between that and our decision not to be a boiled frog in the face of DISH Network's annual price increase, the MythTV frontend has become a much less important part of our entertainment center, well before I ever got a chance to lay hands on the intended replacement hardware. But that's a topic for another day.

Anyway, the CuBox-i4 finally arrived in the mail on Friday, so of course I immediately needed to start hacking on it! Like Gunnar, who wrote last week about his own experience getting a "proper" Debian install on the box, I'm not content with running a binary distribution image prepared by some third party; I expect my hardware toys to run official distro packages assembled using official distro tools and, if at all possible, distributed on official distro images for a minimum of hassle. Whereas Gunnar was willing to settle for using third-party binaries for the bootloader and kernel, however, I'm not inclined to do any such thing. And between my stint at Linaro a few years ago and the recent work on Ubuntu for phones, I do have a little knowledge of Linux on ARM (probably just enough to be dangerous), so I set to work trying to get the CuBox-i4 bootable with stock Debian unstable.

Being such a cutting-edge piece of hardware, that does pose some challenges. Support for the i.MX6 chip is in the process of being upstreamed to U-Boot, but the support for the CuBox-i devices isn't there yet, nor is the support for SPL on i.MX6 (which allows booting the variants of the CuBox-i with a single U-Boot build, instead of requiring a different bootloader build for each flavor). The CuBox-i U-Boot that SolidRun makes available (with source at github) is based on U-Boot 2013.10-rc4, so more than a full release behind Debian unstable, and the patches there don't apply to U-Boot 2014.01 without a bit of effort. But if it's worth doing, it's worth doing right, so I've taken the time to rebase the CuBox-i patches on top of 2014.01 (roughly the workflow sketched at the end of this post), publishing the results of the rebase to my own github repository and submitting a bug to the Debian U-Boot maintainers requesting its inclusion.

The next step is to get a Debian kernel that not only works, but fully supports the hardware out of the box (a 3.13 generic arm kernel will boot on the machine, but little things like Ethernet and HDMI don't work yet). I've created a page in the Debian wiki for tracking the status of this work.
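For the curious, the rebase went roughly as follows (the SolidRun remote URL, branch name, and patch range below are illustrative placeholders; only mainline U-Boot and the 2014.01 version come from the post itself):
$ git clone git://git.denx.de/u-boot.git && cd u-boot
$ git remote add solidrun <URL of SolidRun's u-boot github tree>
$ git fetch solidrun
$ git checkout -b cubox-i-2014.01 v2014.01
$ git cherry-pick <first-cubox-patch>^..<last-cubox-patch>   # resolve conflicts by hand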

3 March 2014

Andrew Pollock: [life] Day 34, Kindergarten, recovery, carpentry

I was pretty knackered today. I should have done some Debian stuff, but I just didn't have it in me, and I had a backlog of household stuff to get done. Zoe woke up at 4am, and I didn't have the energy to try and get her to go back to sleep in her bed, so I just let her sleep in my bed. We both slept in until 7am, which was nice. As a result, we didn't have as much time to have a slow start, and Zoe was a bit grumpy and uncooperative as a result. I think there was at least one meltdown before breakfast. It conveniently rained around the time we were ready to leave, so we drove to Kindergarten. Drop off was super smooth. She pretty much waved me off as soon as we got there. I picked up a couple of packages from the post office on the way home, and then started hanging out the washing before I had to go back to see the podiatrist to get my orthotics fitted in my new running shoes. I finished hanging out the washing and putting away stuff from the Coochie trip and Melbourne trip and had some lunch. After lunch I started work on making a little step for Zoe so she can turn the light off in her bedroom. I had enough material left over from the clothes lines I made for her to make a really dodgy little "stool". It rained again around pick up time, so I drove to Kindergarten again, and picked up Zoe. She'd just woken up from a nap before I got there and was in a good mood. Megan's Dad was picking up Megan on foot, and they were going to have a coffee at the local coffee shop, so we joined them. After that, we went to the post office to check my post office box. I had a cheque that needed to be banked, so we went to the bank and the supermarket, and by then it was pretty much time for Sarah to pick up Zoe from me. I did a bit more carpentry before I lost the light, and went to yoga.

13 February 2014

Craig Small: Google doesn't get SPF

Someone has decided to use my email address as a spam source. They have even used Google to relay it, which, given Google's current policies, seems like a winning idea. I keep getting emails from Google's servers with header lines like this:
X-Original-Authentication-Results: mx.google.com; spf=hardfail (google.com: domain of csmall@small.dropbear.id.au does not designate 66.80.26.66 as permitted sender)
You don't say? You mean that even though my SPF records do not include some dodgy server in California, and even though Google knows I don't include it in my SPF records, they will let the email go through anyhow? SPF records declare where my email comes from. If a record has a -all at the end of it, like mine do, then it means: don't accept it from anywhere else. The hardfail means Google sees the -all and still does nothing about it.
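For illustration, a record of that shape looks something like this (example.com and the address are placeholders from the documentation ranges, not my actual record):
$ dig +short TXT example.com
"v=spf1 a mx ip4:192.0.2.25 -all"
Everything before -all lists the machines allowed to send mail for the domain; the trailing -all asks receivers to hard-fail mail from anywhere else, which is exactly what Google acknowledges in that header and then ignores.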

3 February 2014

Kees Cook: compiler hardening in Ubuntu and Debian

Back in 2006, the compiler in Ubuntu was patched to enable most build-time security-hardening features (relro, stack protector, fortify source). I wasn't able to convince Debian to do the same, so Debian went the route of other distributions, adding security hardening flags during package builds only. I remain disappointed in this approach, because it means that someone who builds software without using the packaging tools on a non-Ubuntu system won't get those hardening features. Think of a sysadmin trying the latest nginx, or a vendor like Valve building games for distribution. On Ubuntu, when you do that ./configure && make you'll get the features automatically. Debian, at the time, didn't have a good way forward even for package builds since it lacked a concept of "global package build flags". Happily, a solution (via dh) was developed about 2 years ago, and Debian package maintainers have been working to adopt it ever since. So, while I don't think any distro can match Ubuntu's method of security hardening compiler defaults, it is valuable to see the results of global package build flags in Debian on the package archive. I've had an ongoing graph of the state of build hardening on both Ubuntu and Debian for a while, but only recently did I put together a comparison of a default install. Very few people have all the packages in the archive installed, so it's a bit silly to only look at the archive statistics. But let's start there, just to describe what's being measured. Here's today's snapshot of Ubuntu's development archive for the past year (you can see development opening after a release every 6 months with an influx of new packages):
Here's today's snapshot of Debian's unstable archive for the past year (at the start of May you can see the archive unfreezing after the Wheezy release; the gaps were my analysis tool failing):
Ubuntu's lines are relatively flat because everything that can be built with hardening already is. Debian's graph is on a slow upward trend as more packages get migrated to dh to gain knowledge of the global flags. Each line in the graphs represents the count of source packages that contain binary packages that have at least 1 hit for a given category. "ELF" is just that: a source package that ultimately produces at least 1 binary package with at least 1 ELF binary in it (i.e. produces a compiled output). The read-only relocations ("relro") hardening feature is almost always done for an ELF, excepting uncommon situations. As a result, the counts of ELF and relro are close on Ubuntu. In fact, examining relro is a good indication of whether or not a source package got built with hardening of any kind. So, in Ubuntu, 91.5% of the archive is built with hardening, with Debian at 55.2%. The stack protector and fortify source features depend on characteristics of the source itself, and may not always be present in a package's binaries even when hardening is enabled for the build (e.g. no functions got selected for stack protection, or no fortified glibc functions were used). Really these lines mostly indicate the count of packages that have a sufficiently high level of complexity that would trigger such protections. The PIE and immediate binding ("bind_now") features are specifically enabled by a package maintainer. PIE can have a noticeable performance impact on CPU-register-starved architectures like i386 (ia32), so it is neither patched on in Ubuntu, nor part of the default flags in Debian. (And bind_now doesn't make much sense without PIE, so they usually go together.) It's worth noting, however, that it probably should be the default on amd64 (x86_64), which has plenty of available registers. Here is a comparison of default installed packages between the most recent stable releases of Ubuntu (13.10) and Debian (Wheezy). It's clear that what the average user gets with a default fresh install is better than what the archive-to-archive comparison shows. Debian's showing is better (74% built with hardening), though it is still clearly lagging behind Ubuntu (99%):
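As a footnote on what "built with hardening" means in practice, here is a sketch of the moving parts (hello.c is a stand-in for any C source; hardening-check shipped in the hardening-includes package at the time):
$ dpkg-buildflags --get CFLAGS    # the global hardening CFLAGS that dh feeds to package builds
$ gcc $(dpkg-buildflags --get CFLAGS) $(dpkg-buildflags --get LDFLAGS) -o hello hello.c
$ hardening-check hello           # reports relro, stack protector, fortify source, PIE, bind_now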

© 2014, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

7 January 2014

Vincent Sanders: Healthy discontent is the prelude to progress

Back in the mists of Internet time (2002) when spirits were brave, the stakes were high, men were real men, women were real women and small furry creatures from Alpha Centauri were real small furry creatures from Alpha Centauri there were few hosting options for a budding open source project.

Despite the meteoric rise of many dot com companies in the early noughties, a project either had to go it alone by running everything itself or pick from a small selection of companies that had not yet worked out how to turn free hosting into money.

One such company, arguably the first, was VA Research with their SourceForge system. Many projects used their platform, including a small niche web browser called NetSurf. For years the service was great; there were rough edges, but nothing awful.

Netsurf issue tracker in the new SourceForge interface
Over time the NetSurf project's requirements grew beyond what SourceForge could provide, and service after service was migrated away; eventually all that was left was the bug tracking system. This remained the state of affairs until mid 2013, when SourceForge forced a migration to their new platform, which made it unsuitable for the project's use case.

Aside from the issue tracker's questionable user interface, SourceForge had started aggressively placing advertising throughout their platform. Some of the placements were so inadvisable that projects started taking the decision to leave.

While I appreciate that SourceForge had to make money to provide a service, they appear to have sown discontent within a large part of their user base without understanding that there are a number of alternative solutions with a much less onerous funding model.

NetSurf used this as an opportunity to move the remaining issue tracking service to our own infrastructure. Rob Kendrick proceeded to evaluate several solutions and in December 2013 I finally found the time to migrate an XML dump of the old data from SourceForge into MantisBT.

Migrating data from one database into another via incompatible formats took me back to my roots. My early career started with programming tasks moving historical business data from ancient large systems, which were about to be scrapped, to modern SPARC-based systems. Later I would be in a role where financial data needed to be retrieved from obsolete proprietary systems and moved into databases on x86 servers.

My experience in this field was not really stretched, as it turns out that modern systems can process a few tens of megabytes in seconds rather than the days such a run took in my youth! So, some ugly Perl scripts and a few hours later, I had a nice shiny SQL database filled with NetSurf's bugs and a MantisBT instance configured to use them.

NetSurf MantisBT instance showing most recently updated open bugs
Then came the hard bit: triaging all the open bugs, fixing up all the bugs submitted by an anonymous user but with email addresses in them, removing duplicates, and checking that every open bug was still valid took almost two weeks of tedious drudgery.

I set up an initial bug workflow within the system that the project developers are still fine-tuning to better suit their needs, but overall Mantis is proving a very flexible tool. The main deficiencies centre around configuration for the project's usage, especially removing unused fields from filters and making the workflow more intuitive.

The resulting system is now getting bug reports submitted again, where the SourceForge system had received three in the six months since the forced migration.

The issue tracker is once more a useful tool for the developers, allowing us to focus on the areas actually causing problems for our users and to see the progress we are making in fixing issues.

Overall this was a successful migration, and it provides a platform the NetSurf project can control, where we can offer guarantees to our users about how their personal information is used, with a clean, rapid interface free of advertisements.

1 January 2014

Tim Retout: 2014

So, happy new year. :) I watched many 30c3 talks via the streams over Christmas - they were awesome. I especially enjoyed finding out (in the Tor talk) that the Internet Watch Foundation need to use Tor when checking out particularly dodgy links online, else people just serve them up pictures of kittens. Today's fail: deciding to set up OpenVPN, then realising the OpenVZ VPS I was planning to use would not support /dev/net/tun. I'm back at work tomorrow, preparing for the January surge of people looking for jobs. Tonight, the first Southampton Perl Mongers meeting of the year.

1 September 2013

Russ Allbery: Review: Redshirts

Review: Redshirts, by John Scalzi
Publisher: Tor
Copyright: June 2012
ISBN: 1-4299-6360-3
Format: Kindle
Pages: 317
The next few reviews deserve a small apology in advance. Normally, I try to write reviews within a few weeks, or at most a month, after reading a book, while it's still fresh in my mind. This is particularly important since I read a lot and sometimes don't have a good memory for specific detail. But this summer has made it completely impossible to keep to any sort of regular writing schedule. As a result, this book, and the subjects of the next few reviews that follow, are books that I read four months ago and am reviewing from memory. Hopefully, I'll still get the details right. Ensign Andrew Dahl is one of the newest crew members of the Universal Union Capital Ship Intrepid. It's his first posting, just out of the Academy. He's one of five new ensigns, expecting a normal tour of duty (albeit with the pride of being on a capital ship of the fleet). But very shortly after arriving on the ship, Andrew and the other new ensigns discover that something is very... strange. The casualty rate on the ship is surprisingly high (as foreshadowed in a prologue). The other crew members avoid the bridge crew, and particularly away missions, with startling effectiveness (and desperation). And things regularly happen on the ship that, for lack of a better description, simply make no sense. Redshirts starts off as a very obvious Star Trek parody. So obvious, in fact, that it's eyeroll-inducing. The prologue reads exactly like a section of a Star Trek episode written by a third-rate hack, leavened with some of the humor that readers of Scalzi's blog have come to expect. But it's obvious that Scalzi is doing this intentionally, and it doesn't take long before the subject of the book shifts. It's a Star Trek parody that functions much more like a Star Trek parody than any universe should, to the degree that it becomes obvious to the characters. And the characters, despite being straight from central casting, are clearly not expecting their universe to act that way and find this extremely strange. By far the best part of this book is the second quarter: the characters have been introduced, they're starting to figure out what's going on, and they have to do so while dodging away-team duty (or trying to survive it) and participating in the ridiculous plot contrivances that keep occurring. Scalzi mixes a feeling of serious consequences with wry humor and handles the mix deftly. I was both laughing and eagerly reading to see what comes next. The last half of the book, though, has problems. How many problems the reader perceives will, I think, depend on how hard one thinks about the situation Scalzi has set up, or how much one has thought about similar constructs in the past. But I'm afraid the setup does not survive serious scrutiny in ways that, for me at least, seriously damaged my ability to engage with the story. I'm going to avoid saying explicitly where Scalzi goes with this plot, since it is a spoiler (although it's commonly mentioned in discussions of this book on the Internet). But it happens to be an idea that some friends of mine and I pounded on for years in some of our role-playing scenarios, poking at the corners and implications, and subsequently gave up on because it doesn't work for storytelling. The idea is compelling, and on first glance seems like a great story seed. But it's too large.
The problem with really big ideas with huge implications is that they can change the nature of a story so much that there's no longer any point of commonality for the reader and the entire nature of story falls apart. And I think that's what happens here. Scalzi handles the deeper implications of the concept by, essentially, ignoring them. He focuses on one narrow set of implications for one specific set of people, largely skips over any discussion of anything larger, and tries a pivot from wry humor (and a bit of slapstick) into serious emotional depth. For me, in part because I'm so familiar with the concept, none of this really worked. The implications of the discovery that drives this story are so much broader and so much more significant than the characters seem to admit that it threw me right out of the story. The ethical implications are absolutely paralyzing, to the point that I don't think you can write a coherent story about them, but ignoring them doesn't work either. It's the kind of discovery that requires action of some kind beyond the attempt by the characters to go back to what they consider a normal life, even if I have a hard time imagining what that action could be. To be fair, he does try to salvage this in the first epilogue, which is all about the implications of his idea and its effects. But it's still too small. That epilogue just barely touches on the implications on a single person, but if one thinks a bit longer, one starts coming up with more and more implications for all of society, for how we talk about story, for how we raise children, for nearly every form of creativity. Implications that involve nasty concepts like slavery and mind control and murder and the nature of consent in ways that are very difficult to avoid. I think Scalzi's attempts at serious emotional connection, including in the other two attached epilogues (which are perspective-shifting close focuses on some of the characters), also suffer from his world setup, but in a different way. The characters are simply too bland to sustain the level of emotional connection he's striving for. They're bland for a very specific reason that is entirely justified by the story that Scalzi is handling, and the implications of that for the primary protagonist were, I thought, one of the best-handled moments of the book. But they're still bland, which undermines the attempted pivot. All that being said, this is not a bad book. Parts of it, particularly the second quarter, are excellent. And even when I wasn't engaging with the characters, Scalzi's writing on a paragraph level is snappy and often quite funny. The first epilogue in particular is hilarious, even if I think it was too limited in the scope of implications that it considers and a bit too meta in where it goes with those implications. And the story is a fun romp, with drama and humor and tension and a fairly satisfying ending. Everyone's a bit too bland and earnest when Scalzi isn't being funny, but he's funny enough that I didn't mind that much. Despite the relatively low rating, I actually recommend this book. If you don't dive into the metaphysical implications, it's a lot of fun, and if you've thought a lot about this particular set of metaphysical implications, it's somewhat satisfying to argue with (and to internally point out all of the hard questions that Scalzi is skipping over). It's particularly fun if you've watched a lot of Star Trek, since Scalzi gets some of the absurdities note-perfect. 
But be aware that the character reactions to Scalzi's concept don't hold up to much scrutiny. Rating: 7 out of 10

3 August 2013

Steve Langasek: Network services and Upstart

So Debian bug #710747 led to an interesting discussion the other night on IRC which made me realize there are a lot of people who have yet to understand why sysvinit needs replacing. I can't speak to whether upstart solves this bug in particular. The tftpd-hpa package in Ubuntu (and in Debian experimental) does have an upstart job, and I have used tftpd-hpa on systems whose network interfaces are managed entirely with NetworkManager, and I have not seen this bug; but I can't say that this is more than coincidence. However, I can speak to how upstart provides facilities that address the general problem of starting services that require a working network connection to be present before they start up. Traditionally on Linux systems, init scripts could assume that the network connection is up before starting any services for the target runlevel. With LSB dependencies, you could also express this as
Required-Start:    $network
The trouble with both of these approaches is that they assume that there is a single point in time, early in the boot of the system, when the "network" is up. This was an ok assumption 15 years ago, when the Linux kernel initialized all devices serially and all our network configuration was static. But nowadays, you simply cannot rely on systems booting in such a way that the network is fully up shortly after boot - or in a way that you can block the system boot waiting for the network to be up. If all of our services would cope gracefully with being started without the network, this would be a non-issue. Unfortunately, not all network services are written in such a way as to work without a network; nor do they all cope with dynamic changes to interfaces or addresses. While it would be better if these services were written more robustly, since there's no shortage of daemons written this way, it's convenient that upstart provides tools for coping with this behavior in the meantime. Here's a real (simplified) example of an upstart job for a service that needs to wait for a non-loopback interface before it starts:
start on (local-filesystems and net-device-up IFACE!=lo)
stop on runlevel [!2345]
expect fork
respawn
exec nmbd -D
Now, you might also have a service that you only want to start up when a particular network device comes up. This is easily expressed in upstart, and might look like this hypothetical job:
start on net-device-up IFACE=wlan0
stop on net-device-down IFACE=wlan0
expect daemon
respawn
exec mydaemon
If you need to restart a daemon whenever the set of active network connections changes, that's less straightforward. Upstart doesn't have the notion of a 'restart' rule, so you would need two jobs to handle this - one for the daemon itself that starts on the first network connection, and a second job to trigger a restart of the daemon when the network status changes. For the tftpd-hpa case in the abovementioned bug, this might look like:
$ cat /etc/init/tftpd-hpa.conf
env DEFAULTS=/etc/default/tftpd-hpa   # path assumed for this example; the original post did not show this line
start on runlevel [2345]
stop on runlevel [!2345]
expect fork
respawn
script
        if [ -f ${DEFAULTS} ]; then
                . ${DEFAULTS}
        fi
        exec /usr/sbin/in.tftpd --listen --user ${TFTP_USERNAME} --address ${TFTP_ADDRESS} ${TFTP_OPTIONS} ${TFTP_DIRECTORY}
end script
$ cat /etc/init/tftpd-hpa-restart.conf
start on net-device-up
task
instance $IFACE
script
        status tftpd-hpa | grep -q start/ || stop
        restart tftpd-hpa
end script
For this case, upstart doesn't provide quite so great an advantage. It's nice to be able to use upstart natively for both pieces, but you can do the same thing with an init script plus an if-up.d script for ifupdown, which is what maintainers do today. I think adding a restart on stanza to upstart in the future to address this would be a good idea. Though in any event, this is far simpler to do with upstart than with any if-up.d script I've ever seen for managing an initscript-based service. Between the more friendly declarative syntax, and the enhanced semantics for expressing when to run (or restart) a job, upstart offers clear advantages over other init systems.

2 July 2013

Dominique Dumont: Seeking help to update OpenSSH configuration editor (Config::Model)

Hello. OpenSSH 6.0 has been out for more than a year, and the OpenSSH configuration editor still lacks the new parameters introduced by this release (like AllowAgentForwarding). Technically, the task is not difficult, but I lack the time to address it: I'm swamped by my real-life job, maintenance of Config::Model, Debian packaging activities... So I'm looking for volunteer(s) to help me on Config::Model. Updating the OpenSSH model is a great way to start! If you want to help, please get in touch; be sure that you won't be forgotten in the change log ;-) Feel free to contact config-model-users at lists.sourceforge.net to get help, exchange ideas, or to discuss how to handle deprecation or upgrade of OpenSSH parameters. All the best. Update: the OpenSSH project name was corrected after Rafal's comment.
Tagged: Config::Model, OpenSsh, Perl

8 June 2013

Michael Stapelberg: Survey answers part 1: systemd has too many dependencies, or it is bloated, or it does too many things, or is too complex

This blog post is the first of a series of posts dealing with the results of the Debian systemd survey. I intend to give a presentation at DebConf 2013, too, so you could either read my posts, or watch the talk, or both :-). The top concern shared by most people is:
systemd has too many dependencies, or it is bloated, or it does too many things, or is too complex
Now this concern actually has a lot of different facets, and I am trying to share my opinion on each of them.

systemd has too many dependencies

First, let's start with "too many dependencies", because that is easy to check and reason about. I have created a document which lists all dependencies of the systemd binary itself (pid 1) and all the binaries which are currently shipped by the systemd Debian package. If you don't want to take my word for it, please read that document. Have you read the document? Very nice! As you can see, the systemd binary itself has 10 dependencies (excluding libc); a quick way to check this yourself is sketched after the list below. Now, the question is, what is bad about dependencies? Why do people list dependencies as a top concern?
  1. Cyclic dependencies. When you hear that your init system depends on DBus, you might argue that there is a cyclic dependency here, because DBus needs to be started by the init system. However, systemd does NOT depend on dbus-daemon (!) to boot your machine. Instead of using the system bus, it uses a private UNIX socket. Therefore, systemd uses DBus merely as a serialization format for IPC between its different processes. Only when you want to access systemd via its API as a user (non-root) do you actually use the system bus. Since we are talking about DBus: DBus provides a well-tested serialization format and IPC mechanism, so systemd doesn't have to reinvent the wheel and instead benefits from wide support within languages.
  2. Complicated code. I feel like there is the implicit assumption that lots of dependencies correlate with complicated code that is easy to break. I encourage you to have a look at systemd's source code: look for the places where specific libraries are used, e.g. enforce_user, which uses libcap. You'll notice that the code is not complex and usage of the libraries is clear.
  3. Software dragging in lots of library packages. The libraries which systemd uses are already in widespread use (e.g. DBus, udev, selinux, libcap, pcre, ...). On a typical Debian installation, only very few of them will be dragged in by systemd, if at all. As an example, on a fresh Debian Wheezy installation, less than 10 packages will end up on your machine when running "apt-get install systemd".
  4. More memory use. The Linux kernel maps libraries into memory only once, no matter how many processes use them. As stated in the dependency list, on machines where the libraries are not already loaded, systemd brings in about 500 KiB of additional memory-mapped libraries in the worst case. On the machines we have these days, this is a reasonable cost to pay for all the benefits systemd gets us. This holds true on embedded systems with only a few MiB of RAM and especially on typical workstations with 8+ GiB of RAM.
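If you would rather measure than take the list on faith, the dependency count is easy to approximate yourself; this one-liner is a sketch (the binary path is the usual Debian location and may differ elsewhere, and the exclusion patterns are approximate):
$ ldd /lib/systemd/systemd | grep -c -v -e 'libc\.so' -e 'ld-linux' -e 'linux-vdso'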
systemd is bloated

Now, let's talk about bloat. Again, this is a point which has many facets. I'd like to quote the Wikipedia definition of software bloat:
Software bloat is a process whereby successive versions of a computer program become perceptibly slower, use more memory or processing power, or have higher hardware requirements than the previous version whilst making only dubious user-perceptible improvements.
The first part of the definition certainly does not match systemd: it is measurably faster than sysvinit. As for memory usage: systemd's RSS is 1.8 MiB, whereas sysvinit uses 0.8 MiB. As I argued in the "More memory use" point in the dependencies section, I think the additional resource cost is well worth the benefits. Also note that systemd's features are NOT all implemented in the binary which is PID 1. As explained in the dependency list, systemd consists of many cleanly separated binaries. So if a new version of systemd gains an additional feature, this does not mean that your PID 1 will be bigger. While systemd runs on any hardware, it has an indirect hardware requirement: it requires some Linux kernel features (which are all enabled in Debian kernels). That might rule out usage of systemd on really old embedded hardware where you don't have a chance to update the kernel. While it is sad that those machines cannot profit from systemd, switching to systemd as a default has no downside either: Debian continues to support sysvinit for quite some time, so these machines will continue to work even with upcoming Debian versions.

systemd does too many things

The Wikipedia definition continues:
[...] perceived bloat can occur from the software servicing a large, diverse marketplace with many differing requirements. Most end users will feel they only need some limited subset of the available functions and will regard the others as unnecessary bloat, even if people with different requirements do use them.
I think the last part of the Wikipedia definition applies to systemd: it does service a large and diverse "marketplace". That marketplace is the entirety of existing software which is started by an init system. Also, systemd can be used on a wide range of hardware (embedded devices, tablets, phones, notebooks, desktops, servers), which requires different features. As an example: on a desktop system you typically don't care strongly about a watchdog feature, but on embedded devices or servers that feature is very handy. Similarly, on a tablet, forward secure sealing of logfiles is not as important as on a server. Therefore, I can understand if you feel that you don't need many of the features systemd provides. But please think of other users and maintainers who are very happy with systemd's benefits. Also note that while systemd supports many things (in separate binaries!), you don't have to use them all. It still makes sense to ship them all in the same package. Take coreutils as another example in that area. The binaries belong together, even though you most likely haven't used all of them (e.g. od, pr, ptx, ... :-)).

systemd is too complex

The remaining concern is that systemd is too complex. In my experience, complexity is often inherent to a specific area and one cannot simply make it go away. Instead, there are different models of how that complexity is represented. Think of the monolithic Linux kernel versus the MINIX microkernel. The latter has a very small amount of source in the kernel, but puts the complexity into userspace. The former uses a different approach with more source in the kernel. The arguments between both camps show that neither is clearly right or clearly wrong. In a way, sysvinit represents the MINIX model: it has a small core (the init binary itself), but a lot of complexity in shell scripts and external programs. The fact that solutions are copied from one init script to another leads to lots of subtle errors and makes code reuse really hard. systemd, however, has more source code in the binaries, but requires only very simple, descriptive, textual configuration instead of complex init scripts. To me, it seems preferable to have the complexity in a single place instead of distributed across lots of people and projects.

Conclusion

In a way, you are right: systemd centralizes complexity from tons of init scripts into a single place. However, it thereby makes it very easy for maintainers to write service files (the equivalent of an init script) and provides a consistent and reliable interface for service management. Furthermore, it is different from sysvinit, and different solutions often seem complex at first. While systemd consumes more resources than sysvinit, it uses them to make more information available about services; its finer-grained service management requires more state-keeping, but in turn offers you more control over your services.

8 May 2013

Steve Langasek: Plymouth is not a bootsplash

Congrats to the Debian release team on the new release of Debian 7.0 (wheezy)! Leading up to the release, a meme making the rounds on Planet Debian has been to play a #newinwheezy game, calling out some of the many new packages in 7.0 that may be interesting to users. While upstart as a package is nothing new in wheezy, the jump to upstart 1.6.1 from 0.6.6 is quite a substantial change. It does bring with it a new package, mountall, which by itself isn't terribly interesting because it just provides an upstart-ish replacement for some core scripts from the initscripts package (essentially, /etc/rcS.d/*mount*). Where things get interesting (and, typically, controversial) is the way in which mountall leverages plymouth to achieve this.

What is plymouth?

There is a great deal of misunderstanding around plymouth, a fact I was reminded of again while working to get a modern version of upstart into wheezy. When Ubuntu first started requiring plymouth as an essential component of the boot infrastructure, there was a lot of outrage from users, particularly from Ubuntu Server users, who believed this was an attempt to force pretty splash screen graphics down their throats. Nothing could be further from the truth. Plymouth provides a splash screen, but that's not what plymouth is. What plymouth is, is a boot-time I/O multiplexer. And why, you ask, would upstart - or mountall, whose job is just to get the filesystem mounted at boot - need a boot-time I/O multiplexer?

Why use plymouth?

The simple answer is that, like everything else in a truly event-driven boot system, filesystem mounting is handled in parallel - with no defined order. If a filesystem is missing or fails an fsck, mountall may need to interact with the user to decide how to handle it. And if there's more than one missing or broken filesystem, and these are all being found in parallel, there needs to be a way to associate each answer from the user with the corresponding question from mountall, to avoid crossed signals... and lost data. One possible way to handle this would be for mountall to serialize the fscks / mounts. But this is a pretty unsatisfactory answer; all other things (that is, boot reliability) being equal, admins would prefer their systems to boot as fast as possible, so that they can get back to being useful to users. So we reject the idea of solving the problem of serializing prompts by making mountall serialize all its filesystem checks. Another option would be to have mountall prompt directly on the console, doing its own serialization of the prompts (even though successful mounts / fscks continue to be run in parallel). This, too, is not desirable in the general case, both because some users actually would like to have pretty splash screens at boot time, and this would be incompatible with direct console prompting; and because mountall is not the only piece of software that needs to prompt at boot time (see also: cryptsetup).

Plymouth: not just a pretty face

Enter plymouth, which provides the framework for serializing requests to the user while booting. It can provide a graphical boot splash, yes; ironically, even its own homepage suggests that this is its purpose. But it can also provide a text-only console interface, which is what you get automatically when booting without a splash boot argument, or even handle I/O over a serial console.
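As an aside, whether plymouth shows the graphical theme or the text fallback is controlled by that splash argument on the kernel command line; on a typical Debian or Ubuntu install the toggle looks something like this (file contents illustrative):
$ grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"   # remove "splash" to get plymouth's text interface
$ sudo update-grub                          # regenerate the boot configuration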
Which is why, contrary to the initial intuitions of the s390 porters upon seeing this package, plymouth is available for all of Debian's Linux architectures in wheezy, s390 and s390x included, providing a consistent architecture for boot-time I/O for systems that need it - which is any machine using a modern boot system, such as upstart or systemd.

Room for improvement

Now, having a coherent architecture for your boot I/O is one thing; having a bug-free splash screen is another. The experience of plymouth in Ubuntu has certainly not been bug-free, with plymouth making significant demands of the kernel video layer. Recently, the binary video driver packages in Ubuntu have started to blacklist the framebuffer kernel driver entirely due to stability concerns, making plymouth splash screens a non-starter for users of these drivers and regressing the boot experience. One solution for this would be to have plymouth offload the video handling complexity to something more reliable and better tested. Plymouth does already have an X backend, but we don't use that in Ubuntu because even if we do have an X server, it normally starts much later than when we would want to display the splash screen. With Mir on the horizon for Ubuntu, however, and its clean separation between system and session compositors, it's possible that using a Mir backend - that can continue running even after the greeter has started, unlike the current situation where plymouth has to cede the console to the display manager when it starts - will become an appealing option. This, too, is not without its downsides. Needing to load plymouth when using crypted root filesystems already makes for a bloated initramfs; adding a system compositor to the initramfs won't make it any better, and introduces further questions about how to hand off between initramfs and root fs. Keeping your system compositor running from the initramfs post-boot isn't really ideal, particularly for low-memory systems; whereas killing the system compositor and restarting it will make it harder to provide a flicker-free experience. But for all that, it does have its architectural appeal, as it lets us use plymouth as long as we need to after boot. As the concept of static runlevels becomes increasingly obsolete in the face of dynamic systems, we need to design for the world where the distinction between "booting" and "booted" doesn't mean what it once did.

11 April 2013

Russ Allbery: Review: Familiar

Review: Familiar, by J. Robert Lennon
Publisher: Graywolf
Copyright: 2012
ISBN: 1-55597-535-6
Format: Hardcover
Pages: 205
This is the first book of an experiment. I'm fairly well-read in science fiction and fantasy and increasingly well-read in non-fiction of interest (although there's always far more of that than I'll get to in a lifetime), but woefully unfamiliar with what's called "mainstream" literature. Under the principle that things people are excited about are probably exciting, I've wanted to read more and understand the appeal. Powell's, which I like to support anyway, has a very nice (albeit somewhat expensive) book club called Indiespensable, which sends its subscribers very nice editions of new works that Powell's thinks are interesting, with a special focus on independent publishers. So I signed up and hope to stick with it for at least a year. (The trick will be fitting these books in amongst my regular reading.) Familiar is the first feature selection I received. Elisa Brown is a mother with a dead son and a living one, a failing marriage, an affair, and a life that is, in short, falling apart. Then, while driving back from her annual pilgrimage to her son's grave, the world seems to twist and change. She finds herself dressed for business, wearing a nametag and apparently coming back from a work-related convention, driving a car that's entirely unfamiliar to her. When she gets home, everything else has changed too: her marriage seems to be on firmer ground, but based on rules she doesn't understand. She has a different job, different clothes, a different body in some subtle ways. And both of her sons are alive. I'm going to have to have a long argument with myself about where to (meaninglessly) categorize this on my review site, since in construction it is an alternate reality story and therefore a standard SF trope. Any SF reader is going to immediately assume Elisa has somehow been transported into an alternate reality with a key divergence point from her own. But that's not Lennon's focus. He stays ambiguous on the question of whether this is really happening or whether Elisa had some sort of nervous breakdown, and while some amount of investigation of the situation does take place, it's the sort of investigation that an average person with no access to special resources or scientific knowledge and a completely unbelievable story would be able to do: Internet conspiracy chatrooms and some rather dodgy characters. The focus is instead on Elisa's reaction to the situation, her choices about how to treat this new life, and on how she processes her complex emotions about her family and herself. I had profoundly mixed feelings about this book when I finished it, and revisiting it to review it, I still do. The writing is excellent: spare, evocative, and enjoyable to read. Lennon has a knack for subjective description of emotion and physical experience. The reader feels Elisa's deep discomfort with her changed body and her changed car, her swings between closed-off emotions and sudden emotional connection with a specific situation, and her struggle with the baffling question of how to come to terms with a whole new life. The part of the book from about the middle to nearly the end is excellent. Video games make an appearance and are handled surprisingly well. And when Elisa starts being blunt with people, I found myself both liking her and caring about what happens to her.
On the other hand, Familiar also has some serious problems, and one of the biggest is the reaction I feared I'd have to mainstream literature: until Elisa started opening up and taking action, I found it extremely difficult to care about anyone in this book. They're all so profoundly petty, so closed off and engrossed in what seem like depressing and utterly boring lives. I'm sure that some of this is intentional and is there to lay the groundwork for Elisa's own self-discovery, but even towards the end of that self-discovery, everything here is so relentlessly middle-class suburbia that I felt stifled just reading about it. I think it's telling that no one in this book ever seems to have any substantial problem with money, or even with work. Elisa walks into a job that she's never done before and within a few weeks is doing it so well that she can take large amounts of time to wander around for plot purposes. This is a book about highly privileged people being miserable in a bubble. While those people certainly do exist, and I can believe that they act like this, I'm not sure how much I want to read about them. Thankfully, the plot does lead Elisa to poke some holes in that bubble, if never to get out of it entirely. This is also another one of those stories in which every character has massive communication problems. Now, this deserves some caveats: Elisa's communication problems with her husband are part of the problem that starts the book and are clearly intentional, as are her communication difficulties with her children. And she's not really close enough to anyone to confide in them. But even with those caveats, no one in this book really talks to anyone else. It's amazing that anyone forms any connections at all, given how many walls and barriers they have around themselves. As someone with a bit of a thing for communication, this drove me nuts to read about, particularly in the first half of the book. But the worst problem is that Lennon completely blows the ending. And by that I don't just mean that I disliked the ending. I mean the ending is so unbelievable and so contrary to the entire rest of the book, at least the way I was reading and understanding it, that I think Familiar is a much better novel if you just remove the final scene entirely. It was such a bizarre and unnecessary twist that I found it infuriating. I don't want to spoil an ending, even a bad ending, so I'll only say this: it felt to me like Lennon just wasn't comfortable with his setting and plot driver and couldn't leave it alone. I think an experienced SF author wouldn't have made this mistake. There were two obvious possible conclusions to draw from the setting, plus a few interesting combinations, and I think someone comfortable with this sort of alternate reality story would have taken one of those options, any of which would have been a reasonable dismount for the plot. Alternately, they could have left it entirely ambiguous to the end and explored why the explanation may not actually matter. But Lennon seemed to me to have a tin ear for plausibility and for the normal flow of this sort of story and seems to have taken it as license for arbitrary events, thus completely violating the internal consistency and emergent rules that he'd spent the rest of the book building. I've mostly talked about my reactions to the characters and the writing and have not said much about the plot. That's somewhat intentional, since figuring out where the story will go is one of the best parts of this book.
It's surprisingly tense and well-crafted for not having that much inherent dramatic tension. The excellent writing kept me reading through the first part, when I hated everyone in the story, and then Elisa started taking responsibility for her own life and actions and I started really enjoying the book while being constantly surprised. I think it's the sort of story that's best to take without too much foreknowledge of where it's going. I'm going to call this first experiment a qualified success. Familiar was certainly interesting to read, and quite different from what I normally read despite the SF premise. If it weren't for the ending, I'd be recommending it to other people. Rating: 6 out of 10

1 April 2013

Andrew Pollock: [life/repatexpat] On Queensland electricity retailers

So one of the first things I need to do when I get my apartment today is get the electricity on. I've actually already ordered ADSL, which shows you what I think is more important, but without electricity, there is no Internet... It would appear that in 2007, Queensland opened up its electricity and gas markets to "full retail competition". According to Energex, they do the generation and distribution, and the retailer handles connections, disconnections, billing and green energy. I still need to deal directly with Energex for power outages, reporting faulty street lights, requesting a tree that's near power lines be trimmed, and so on. So if the retailers have no say in what price they're paying for the electricity that's being generated by Energex, I'm struggling to see what their point is. I guess they get to differentiate on customer service, but really, that's it? It was also really hard to find a canonical list of retailers to choose from. I would have thought that'd be linked off Energex's "Choosing your electricity retailer" page, but no, it's buried in the FAQ. Now it comes down to a case of doing a comparison between 11 retailers and trying to choose one. Or just going with the first one on the list. It's just electricity, people. It's a utility. I do not want to expend as much time on choosing an electricity retailer as I would my ISP (interestingly, it looks like Dodo has gotten in on the electricity retailing act). But one could say that an upside of having jet lag and being awake since 3am is that one has time to comparison shop the 11 electricity retailers, except I won't. I'll just write this blog post instead. I have heard of one horror story, where a homeowner returned to living in his house, and had a nightmare time with one retailer because the previous occupants had an outstanding debt with that retailer, and he was trying to get a connection going with a different retailer, and lots of hilarity ensued. Except it wasn't hilarious. So I'll have to keep an eye on that. My current thought is to go with AGL. What's pretty crazy is that at no point, from quickly skimming AGL's landing page for Queensland pricing, is there any indication of kWh or an actual price for anything. There's lots of noise about discounts, and flexibility, and other nonsense, but it seems like the actual cost of energy is so buried, it's incredible. It seems to be all about locking in on a contract, which is somewhat amazing. This is just electricity, but they seem to have turned this into a mobile phone plan type of situation. Amazing. After more digging, it looks like I can pay 5.5 cents per kWh for 100% green energy. Maybe.

23 March 2013

Dominique Dumont: Next version of Config::Model will use asynchronous check

Hello. To check the validity of dependencies in a Debian package, Config::Model queries a remote web server to get the list of package versions known to Debian. The first version of this check did sequential requests: when the cache was not very fresh, I had to wait more than 60s before getting results for a complex package like padre. That was very frustrating (but less frustrating than checking package names manually). For the following version, I hacked AnyEvent into the Dpkg model to run parallel queries. This went much faster but gave weird results: before getting a response from the Debian server, packages were flagged as unknown. To get consistent results, running cme check dpkg twice was required. So, at new year, I decided to bite the bullet and correctly implement value checks with asynchronous queries to the remote server. This new feature is now ready and will be delivered in Config::Model 2.030. cme will now return consistent results. This new release is mostly backward compatible. You may notice some quirks with some other modules based on Config::Model: these quirks will disappear once those modules are updated. This should not take long, since all the updates are ready in github. All the best
Tagged: Config::Model, debian, dpkg, Perl

6 February 2013

Aigars Mahinovs: Pension forecast 18.43

Update: I apologise for spamming all the nice people on Planet Debian with this unrelated post in a foreign language. It got mis-tagged, and (although I tried classifying it correctly and removing it from the separate Debian-only RSS feed going to Planet Debian) the Planet software just keeps it around until the admins get a spare moment to remove it manually. The basic idea of the post is that there is an e-government service of the Latvian government that lets any citizen check what their pension would be, but it is kind of useless, because it calculates what your pension would be if you decided to retire right now. So in the post below I provide a few simple steps for estimating what the real pension could be (in today's currency values) if a person kept paying into the system at the same rate until the normal retirement age. Latvija.lv has a tool for forecasting your future pension, but it is, to put it mildly, useless for anyone not planning to retire within the next year. After a quick look at the relevant laws and regulations, I arrived at a more useful method. First of all, we can safely ignore pension indexation and all the related recalculations of pension capital or increases in contributions: it is clear that, as the years go by, there will be some rate of inflation that pushes wages up, increases contributions to the pension capital and triggers all kinds of payout recalculations, but that same inflation will also directly affect how much each lat of the final pension is worth at the moment the pension is received. The effect of inflation is therefore neutralised if we judge the final figure in today's prices, i.e. if the result is a pension of 400 Ls, then whether that is a lot or a little should be judged in today's prices, rather than by trying to guess what prices will be in 30 years. With all that in mind, the pension calculation simplifies considerably:
  1. Get a report from the Latvija.lv service. It gives you the pension capital accumulated so far (U) and an approximate figure for the annual growth of that capital (P).
  2. Work out how many years are left until retirement age (t), i.e. until age 62. I would even assume that by then the target age will already be 65.
  3. Estimate the approximate pension capital at retirement age, assuming contributions continue as they are now: K = U + (P * t).
  4. Work out the resulting monthly pension from that capital: Pension = K / G / 12, where G is taken from the row of the Cabinet of Ministers regulations matching your retirement age; for age 62 it is 18.43 (a worked example follows the note below).
Note: G is the statisticians' estimate of how many more years, on average, Latvian residents live once they have reached the given retirement age, i.e. it is a forecast of how many years, on average, you will have the pleasure of enjoying your pension. If the result seems too small, bear in mind that the sum does not include dividends from the 2nd and 3rd pension pillars, which over a couple of decades can be quite impressive. Some forecasts suggest that the 2nd and 3rd pillars together could add roughly as much pension again as the 1st pillar, but that depends heavily on the long-term behaviour of the fund market. Running this calculation for myself, I get a pension forecast of about 400 Ls. If the predictions about the 2nd and 3rd pillars come true and I imagine myself today as a pensioner with a 600-800 Ls pension, well, that would not be bad at all. :)
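To make the arithmetic concrete, here is the same estimate as a small Perl script; all the input figures are invented for illustration:

use strict;
use warnings;

my $U = 5000;    # pension capital accumulated so far, in Ls (invented figure)
my $P = 500;     # annual growth of the capital, Ls/year (invented figure)
my $t = 30;      # years left until retirement age
my $G = 18.43;   # expected years on pension at age 62, from the regulations

my $K = $U + $P * $t;           # capital at retirement: 20000 Ls
my $pension = $K / $G / 12;     # monthly pension in today's prices
printf "Estimated monthly pension: %.2f Ls\n", $pension;   # ~90.43 Ls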

Biella Coleman: Edward Tufte was a phreak

It has been so very long since I have left a trace here. I guess moving to two new countries (Canada and Quebec), starting a new job, working on Anonymous, and finishing my first book was a bit much. I miss this space, not so much because what I write here is any good, but it's a handy way for me to keep track of time and what I do and even think. My life feels like a blur at times, and hopefully here I can see its rhythms and changes a little more clearly if I occasionally jot things down. So I thought it would be nice to start with something that I found surprising: the famed information designer Edward Tufte, a professor emeritus at Yale, was a phone phreak (and there is a stellar new book on the topic by former phreak Phil Lapsley). He spoke about his technological explorations during a sad event, a memorial service in NYC which I attended for the hacker and activist Aaron Swartz. I had my wonderful RA transcribe the speech, so here it is [we may not have the right spelling for some of the individuals, so please let us know of any mistakes]:
Edward Tufte's Speech From Aaron Swartz's Memorial
Speech starts at 41:00 [video cuts out in the beginning]
We would then meet over the years for a long talk every now and then, and my responsibility was to provide him with a reading list, a reading list for life, and then about two years ago Quinn had Aaron come to Connecticut, and he told me about the four and a half million downloads of scholarly articles, and my first question is, "Why isn't MIT celebrating this?"
[Video cuts out again]
Obviously helpful in my career there, he then became president of the Mellon Foundation, he then retired from the Mellon Foundation, but he was asked by the Mellon Foundation to handle the problem of JSTOR and Aaron. So I wrote Bill Bullen(sp?) an email about it. I said first that Aaron was a treasure, and then I told a personal story about how I had done some illegal hacking and been caught at it and what happened. In 1962, my housemate and I invented the first blue box, that's a device that allows for free, undetectable, unbillable long distance telephone calls. And we got this up and played around with it, and the end of our research came when we concluded what was the longest long distance call ever made, which was from Palo Alto to New York time-of-day via Hawaii. Well, during our experimentation, AT&T, on the second day it turned out, had tapped our phone, and uh, but it wasn't until about 6 months later when I got a call from the gentleman, AJ Dodge, senior security person at AT&T, and I said, "I know what you're calling about." And so we met, and he said, "You, what you are doing is a crime that would...", you know, all that. But I knew it wasn't serious, because he actually cared about the kind of engineering stuff, and complained that the tone signals we were generating were not the standard, because they record them and play them back in the network to see what numbers you were trying to reach, but they couldn't break through the noise of our signal. The upshot of it was that, uh, oh, and he asked why we went off the air after about 3 months, because this was to make long distance telephone calls for free, and I said this was because we regarded it as an engineering problem, and we had made the longest long distance call, and so that was it. So the deal was, as I explained in my email to Bill Bullen, that we wouldn't try to sell this (and we were told, I was told, that crime syndicates would pay a great deal for this), we wouldn't do any more of it, and that we would turn our equipment over to AT&T. And so they got a complete vacuum tube isolator kit for making long distance phone calls. But I was grateful for AJ Dodge and, I must say, AT&T, that they decided not to wreck my life. And so I told Bill Bullen that he had a great opportunity here, to not wreck somebody's life, and of course he thankfully did the right thing.
Aaron's unique quality was that he was marvelously and vigorously different. There is a scarcity of that. Perhaps we can all be a little more different too.
Thank you very much.

31 January 2013

Kartik Mistry: Mandatory yearly post

Somehow, I felt that there should be a post, at least to abuse Planet Debian in the new year of 2013. So, what's going on?
  1. Finished a full marathon, aka the Standard Chartered Mumbai Marathon, with a pathetic time of 5h55m35s (and felt good at the end).
  2. Good thing: visited family twice in the last 2 months.
  3. N idlis were consumed, where N >= 300. Overall, I'm a big fan of South Indian food, which is more suitable for my stomach. Thanks to running, my junk food consumption is lower than I can imagine, but you can find me in those places on some weekends, refilling the calories lost on LSD runs!
  4. Last Sunday, I delivered a lecture on the Debian Ecosystem, mostly the same presentation with additional updates and details, at SVIT, Doddaballapur (technically, somewhat on the outskirts of BLR) as part of FSMK's 5-day event. Judging from the applause, it was well received. The photos are still lying on the memory card and probably will not come out until the next time the camera is used.
  5. I'm very silent on Debian packaging, and someone will hit me over pending tasks, but lately I've been enjoying packaging and fixing tlsdate/torbirdy for Debian to stay motivated. Other than that, a few uploads were done.

22 January 2013

Matthew Palmer: When is a guess not a guess?

When it's a "prediction". In the 4th January edition of the Guardian Weekly, the front page story, entitled "Meet the world's new boomers", contained this little gem:
Back in 2006, [PricewaterhouseCoopers] made some forecasts about what the global economy might look like in 2050, and it has now updated the predictions in the light of the financial crisis and its aftermath.
Delightful. They made some forecasts about what the global economy might look like. Given that they clearly didn't include any impact of the GFC in their forecasts, it clearly wasn't a particularly accurate forecast. Y'know what an inaccurate prediction is called? Guesswork. Let's call a spade a spade here. I see this all the time, and it's starting to shit me. People making predictions and forecasts and projections hither and yon, and they're almost always complete bollocks, and they never get called on it. I read the Greater Fool blog now and then, and that blog is chock full of examples of people making predictions which have very little chance of being in any way accurate. While Dr Ben Goldacre and others are making inroads into requiring full disclosure in clinical trials, I'm not aware of anyone taking a similar stand against charlatans making dodgy-as-hell predictions over and over again, with the sole purpose of getting attention, without any responsibility for the accuracy of those predictions. Is anyone aware of anyone doing work in this area, or do I need to register badpredictions.net and start calling out dodginess?

4 January 2013

Russell Coker: Links January 2013

AreWomenHuman has an interesting article about ViolentAcrez and the wide support for trolling (including by media corporations) [1].
Chrys Stevenson wrote an important article for the ABC about the fundamentalist Christians who are trying to take over the Australian education system [2].
Tavi Gevinson gave an interesting TED talk titled "A teen just trying to figure it out" about her work starting Rookie magazine and her ideas about feminism [3].
Burt Rutan gave an interesting and inspiring TED talk about the future of space exploration [4]. One of his interesting points is that fun really is defensible in regard to tourism paying for the development of other space industries.
Stephen Petranek gave an interesting TED talk about how to prepare for some disasters that could kill a significant portion of the world's population [5]. Some of these are risks of human extinction; we really need to spend some money on them.
John Wilbanks gave an interesting TED talk about the way that current informed consent laws prevent large-scale medical research [6]. He says "I live in a web world where when you share things, beautiful stuff happens, not bad stuff".
Joey Hess was interviewed for The Setup, and the interview sparked a very interesting Hacker News discussion about workflow for software development [7]. Like most developers I prefer large screens with high resolution; I have an EeePC 701 which works reasonably well as an ultra-portable system, but I largely don't use it now that I have an Android phone (extremely portable and totally awful input usually beats moderately portable and mostly awful input for me). But Joey's methods are interesting, and it seems that for some people different systems give the best result.
Jeff Masters gave an insightful TED talk about the weather disasters that may seriously impact the US in the next 30 years [8]. Governments really need to start preparing for such things; some of them are really cheap to mitigate if work is started early.
Bryan Stevenson gave an inspiring TED talk about the lack of justice in the US justice system [9].
Wouter Verhelst wrote an insightful article about some of the criticisms of Linux from Windows users [10]. He references a slightly satirical post he previously wrote about why Windows isn't ready for desktop use.
Paul Carr wrote an interesting article comparing the disruptive business practices of dot-com companies to the more extreme aspects of Ayn Rand's doctrine [11]. In reading some of the links from that article I discovered that Ayn Rand was even more of a sociopath than I had previously realised.
Lindy West gave an amazing Back Fence PDX talk about dealing with nasty blog comments from the PUA/MRA communities [12]. After investigating them she just feels sorry for the trolls, whose lives suck.
Hank from the Vlogbrothers explains gender, sex, sexual orientation, etc [13].
Rick Falkvinge wrote an interesting article about recent political news from Brazil: they had a proposed law that was very positive for liberty on the Internet, but it was sabotaged by the media and telcos [14]. We should try to avoid paying any money to the media industry so that they can go away sooner.
Amy Cuddy gave an interesting TED talk about body language, power, and the impostor syndrome [15].
Caleb Chung gave an interesting TED talk about toy design, which focussed on Pleo, a robotic dinosaur with an SD card and USB socket to allow easy reprogramming by the user [16].
